
misc: Update gemm/batched gemm cubins from trtllm-gen, gemm header refactor#2740

Merged
aleozlx merged 12 commits into flashinfer-ai:main from jimmyzho:update-cubin
Apr 7, 2026

Conversation

@jimmyzho
Contributor

@jimmyzho jimmyzho commented Mar 10, 2026

📌 Description

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes

    • Restored GEMM problem-dimension validity checks for correct sizing and runtime behavior.
  • Behavioral Changes

    • Updated FP8 kernel naming so heuristic selection matches configured kernels.
  • Chores

    • Updated artifact paths/checksums and enabled additional header download/include-path handling for JIT/cubin flow.
  • Tests

    • Slightly tightened numeric tolerance thresholds for MoE FP4/FP8 tests.
  • Removed

    • Deleted the exported GEMM interface and several GEMM-related headers and parameter/trait declarations.

@coderabbitai
Contributor

coderabbitai bot commented Mar 10, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Walkthrough

Updated TRT‑LLM GEMM integration: kernel name strings adjusted, GEMM problem-dimension validity fields enabled, many exported GEMM headers/types removed, JIT cubin/header download re-enabled with updated artifact paths/checksums, and small test tolerance tweaks.

Changes

Cohort / File(s) Summary
GEMM runners
csrc/trtllm_gemm_runner.cu, csrc/trtllm_low_latency_gemm_runner.cu
Updated hard-coded FP8 kernel name substrings (cga1x1x1c1x1x1), and enabled mProblemDimensions.mValidM/mValidN/mValidK assignments — review kernel name matching and tactic validity usage.
Exported GEMM headers (deleted)
include/flashinfer/trtllm/gemm/trtllmGen_gemm_export/Enums.h, .../GemmInterface.h, .../GemmOptions.h, .../KernelParams.h, .../KernelParamsDecl.h, .../KernelTraits.h, .../TmaDescriptor.h
Entire interface/type/utility headers removed — major public API and device-parameter construction deleted. Audit compile/link targets and any code relying on these declarations.
Artifacts & checksums
flashinfer/artifacts.py
Replaced TRTLLM_GEN_BMM and TRTLLM_GEN_GEMM artifact subpaths and their SHA256 checksums; update affects artifact retrieval and validation.
JIT cubin/header loader
flashinfer/jit/cubin_loader.py, flashinfer/jit/gemm/core.py
Added trtllm/gen/SparsityDecl.h to header lists; re-enabled download_trtllm_headers and added FLASHINFER_CUBIN_DIR to include paths for generated modules — impacts cubin/header caching and JIT include resolution.
Tests
tests/moe/test_trtllm_gen_fused_moe.py
Tightened percent tolerances: FP4 & FP8 per-tensor 0.925→0.92; FP8 block-scale 0.80→0.79. Verify test stability.
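Percent-style tolerances like these typically assert that at least a given fraction of output elements falls within a relative error bound, rather than requiring every element to match. A minimal sketch of such a check (the function name, `rtol` value, and thresholds here are illustrative stand-ins, not the actual helpers in test_trtllm_gen_fused_moe.py):

```python
import numpy as np

def percent_match(actual, expected, rtol=0.2, percent=0.92):
    """Return True if the fraction of elementwise-close values meets `percent`.

    Illustrative sketch of a percent-style tolerance check; the real tests
    use their own get_tolerances helper with per-dtype thresholds.
    """
    close = np.isclose(actual, expected, rtol=rtol)
    return bool(close.mean() >= percent)

a = np.ones(100)
b = a.copy()
b[:5] += 1.0  # 5% of elements are off by 100% relative error

assert percent_match(a, b, rtol=0.2, percent=0.92)       # 95% match >= 92%
assert not percent_match(a, b, rtol=0.2, percent=0.96)   # 95% match < 96%
```

Tightening 0.925 to 0.92 therefore loosens the required match fraction slightly, absorbing small numeric shifts from the new kernels.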

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested reviewers

  • aleozlx
  • yzh119
  • jiahanc
  • cyx-6
  • IwakuraRein
  • bkryu
  • nv-yunzheq
  • sricketts

Poem

🐰 I hopped through kernels, snug and bold,
Renamed a string, made valid dims hold.
Headers fetched, checksums set with care,
Deleted old types — lighter air.
Carrot patch complete — compile and share.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning — The PR description contains only the template with unchecked checkboxes and placeholder comments, with no actual description of what changes are made, why they're needed, or related issues. Resolution: provide a concrete description of the changes (kernel name updates, artifact hash updates, header removal rationale), explain why these updates are needed, link related issues if applicable, and document any testing performed.
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 20.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check ✅ Passed — The PR title references two distinct changes: cubin updates from trtllm-gen and a gemm header refactor. The changeset confirms both occur (cubin artifact paths updated, header files removed/refactored), making the title relevant to the actual changes.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on updating and refining the integration of TRTLLM-Gen GEMM cubins, particularly enhancing support for FP8 operations with a new DeepSeek option. It streamlines kernel selection, enables previously disabled problem dimension validations, and addresses JIT compilation issues to ensure the use of the latest and most optimized kernels. Minor adjustments to test tolerances were also made.

Highlights

  • TRTLLM-Gen Cubin Updates: Updated the artifact paths and checksums for TRTLLM-Gen Batched GEMM (BMM) and GEMM cubins, ensuring the latest kernels are used.
  • FP8 DeepSeek Support: Introduced a useDeepSeekFp8 option in TrtllmGenGemmRunnerOptions and integrated it into the GEMM configuration selection logic, enabling specific FP8 kernel optimizations.
  • Unified Kernel Selection Heuristic: Refactored the selectHeuristic function in TrtllmGenGemmRunner to use a unified approach for selecting GEMM kernels, removing specialized FP8 kernel selection logic.
  • Enabled Problem Dimension Validation: Uncommented and enabled the setting of mValidM, mValidN, and mValidK problem dimensions in createGemmData functions for both standard and low-latency GEMM runners, indicating a fix or readiness in the underlying trtllm-gen library.
  • JIT Compilation Fixes: Re-enabled the download_trtllm_headers calls for JIT compilation of GEMM modules and adjusted include paths, resolving previous issues with cubin generation.
  • Test Tolerance Adjustments: Slightly adjusted the percent tolerance values in FP4 and FP8 related tests within test_trtllm_gen_fused_moe.py.
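The include-path change described above can be pictured roughly as follows. Note that `FLASHINFER_CUBIN_DIR` here is a stand-in path and `build_extra_include_paths` is a hypothetical helper, not the actual code in flashinfer/jit/gemm/core.py:

```python
from pathlib import Path

# Stand-in for flashinfer.jit.env.FLASHINFER_CUBIN_DIR; the real cache
# location is configured by the library, not hard-coded like this.
FLASHINFER_CUBIN_DIR = Path.home() / ".cache" / "flashinfer" / "cubins"

def build_extra_include_paths(base_paths):
    # Downloaded trtllmGen_gemm_export headers live under the cubin cache,
    # so JIT-generated modules must also search that directory.
    return [*base_paths, str(FLASHINFER_CUBIN_DIR)]

paths = build_extra_include_paths(["/opt/flashinfer/include"])
```

The effect is simply that the compiler's `-I` search list gains the cubin cache directory, letting the re-enabled `download_trtllm_headers` output resolve at JIT compile time.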


Changelog
  • csrc/trtllm_gemm_runner.cu
    • Added useDeepSeekFp8 boolean field to TrtllmGenGemmRunnerOptions struct.
    • Removed the select_kernel_fp8 function.
    • Modified TrtllmGenGemmRunner constructor to include useDeepSeekFp8 in configuration matching.
    • Uncommented and enabled mValidM, mValidN, mValidK assignments in multiple createGemmData calls.
    • Simplified selectHeuristic function to directly call getValidTactics.
    • Integrated useDeepSeekFp8 into trtllm_gemm and trtllm_gemm_tactics function calls.
  • csrc/trtllm_low_latency_gemm_runner.cu
    • Uncommented and enabled mValidM, mValidN, mValidK assignments in createGemmData function.
    • Updated mUseShuffledMatrixA to mUseShuffledMatrix in TrtllmLowLatencyGemmRunner configuration checks.
  • flashinfer/artifacts.py
    • Updated the artifact paths for TRTLLM_GEN_BMM and TRTLLM_GEN_GEMM.
    • Updated the checksums for TRTLLM_GEN_BMM and TRTLLM_GEN_GEMM.
  • flashinfer/jit/cubin_loader.py
    • Added trtllm/gen/SparsityDecl.h to the list of headers downloaded by download_trtllm_headers.
  • flashinfer/jit/gemm/core.py
    • Uncommented the import of download_trtllm_headers.
    • Re-enabled the download_trtllm_headers call and removed the TODO comment in gen_trtllm_gen_gemm_module.
    • Adjusted extra_include_paths in gen_trtllm_gen_gemm_module to include FLASHINFER_CUBIN_DIR.
    • Re-enabled the download_trtllm_headers call and removed the TODO comment in gen_trtllm_low_latency_gemm_module.
    • Adjusted extra_include_paths in gen_trtllm_low_latency_gemm_module to include FLASHINFER_CUBIN_DIR.
  • tests/moe/test_trtllm_gen_fused_moe.py
    • Adjusted the percent tolerance from 0.925 to 0.92 in get_tolerances for FP4 tests.
    • Adjusted the percent tolerance from 0.8 to 0.79 in get_tolerances for FP8 block-scale tests.
    • Adjusted the percent tolerance from 0.925 to 0.92 in get_tolerances for FP8 per-tensor tests.
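Checksum pinning of this kind is normally enforced by hashing the downloaded artifact and comparing it against the recorded digest. A self-contained sketch of that pattern (`verify_artifact` is illustrative; the actual validation lives in flashinfer/artifacts.py):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream in chunks so large cubin archives never load fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    # A mismatch means a corrupted download or a stale pinned digest,
    # which is exactly what this PR's checksum updates prevent.
    return sha256_of(path) == expected_sha256
```

Updating the cubin subpaths without updating the paired SHA256 values would make every download fail this check, which is why both move together in this change.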

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the GEMM cubins from trtllm-gen. The changes involve updating artifact paths and checksums, re-enabling header downloads, and adjusting the C++ code to accommodate the new cubins. A key change is the removal of a hardcoded kernel selection function in favor of a more general tactic selection mechanism. Several TODOs related to trtllm-gen fixes have been addressed, and the corresponding code has been enabled. Test tolerances were also slightly adjusted, which is expected with new kernels. The changes are consistent and look correct. I have a few minor suggestions for code cleanup.

Comment thread flashinfer/artifacts.py Outdated
Comment on lines +140 to +145
        # "b55211623be7f5697c5262ffd8361fc06c147bc9/batched_gemm-b3c1646-c111d7c/"
        "c6de39dedecf3f6a06072082a23c959c9bdcbf11/batched_gemm-4daf11e-c111d7c/"
    )
    TRTLLM_GEN_GEMM: str = (
        # "1fddc48b7b48af33914d040051b3e2ee9ba4701e/gemm-145d1b1-9b113e3/"
        "5b85f1064103f8d01b5aa574b45a355ccee0a635/gemm-4daf11e-8adb015/"
Contributor


medium

The old artifact paths for TRTLLM_GEN_BMM and TRTLLM_GEN_GEMM are commented out. It would be cleaner to remove this commented code before this pull request is merged.

Comment thread flashinfer/artifacts.py Outdated
Comment on lines +163 to +169
        # "0af823880730c4f0b3832d2208fab035946694b83444410b9309db5613d60195"
        "3c4070eb507f7a8391051ada620756259eae4551dd0a0d797d018d6ae5b7d39f"
    )
    DEEPGEMM: str = "1a2a166839042dbd2a57f48051c82cd1ad032815927c753db269a4ed10d0ffbf"
    TRTLLM_GEN_GEMM: str = (
        # "15cb8c85dfb5eddd4f121d64cb5a718321fb55b85aa19df10ddc1329d4a726b9"
        "6ec458b818f2de27176950598292ed64816397988031d5db0c6799dffc378cf9"
Contributor


medium

The old checksums for TRTLLM_GEN_BMM and TRTLLM_GEN_GEMM are commented out. It would be cleaner to remove this commented code before this pull request is merged.

@yzh119
Collaborator

yzh119 commented Mar 10, 2026

/bot run

@flashinfer-bot
Collaborator

GitLab MR !396 has been created, and the CI pipeline #45774302 is currently running. I'll report back once the pipeline job completes.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
flashinfer/jit/gemm/core.py (1)

544-554: Consider extracting common header download logic.

The header download logic in gen_trtllm_gen_gemm_module (lines 544-554) and gen_trtllm_low_latency_gemm_module (lines 710-720) is identical. Consider extracting this into a helper function to reduce duplication.

♻️ Suggested helper function
def _download_gemm_headers(checksum: bytes) -> None:
    """Download TRTLLM GEMM headers to the cubin directory."""
    include_path = f"{ArtifactPath.TRTLLM_GEN_GEMM}/include"
    header_path = f"{include_path}/trtllmGen_gemm_export"
    header_dest_dir = (
        jit_env.FLASHINFER_CUBIN_DIR
        / "flashinfer"
        / "trtllm"
        / "gemm"
        / "trtllmGen_gemm_export"
    )
    download_trtllm_headers(
        "gemm", header_dest_dir, header_path, ArtifactPath.TRTLLM_GEN_GEMM, checksum
    )

Also applies to: 710-720

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@flashinfer/jit/gemm/core.py` around lines 544 - 554, Extract the duplicated
header download block from gen_trtllm_gen_gemm_module and
gen_trtllm_low_latency_gemm_module into a small helper (e.g.
_download_gemm_headers(checksum: bytes)) that builds
include_path/header_path/header_dest_dir and calls
download_trtllm_headers("gemm", header_dest_dir, header_path,
ArtifactPath.TRTLLM_GEN_GEMM, checksum); then replace the duplicated blocks in
gen_trtllm_gen_gemm_module and gen_trtllm_low_latency_gemm_module with a call to
that helper to remove duplication.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e16a0bf2-3331-4355-bf55-e02ac48a0663

📥 Commits

Reviewing files that changed from the base of the PR and between fe06b91 and 9d28506.

📒 Files selected for processing (6)
  • csrc/trtllm_gemm_runner.cu
  • csrc/trtllm_low_latency_gemm_runner.cu
  • flashinfer/artifacts.py
  • flashinfer/jit/cubin_loader.py
  • flashinfer/jit/gemm/core.py
  • tests/moe/test_trtllm_gen_fused_moe.py

  configOptions.mTransposeMmaOutput == true &&
  configOptions.mLayoutA == gemm::gemm::MatrixLayout::BlockMajorK &&
- configOptions.mUseShuffledMatrixA) {
+ configOptions.mUseShuffledMatrix) {
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for the actual field name in the codebase
rg -n "mUseShuffledMatrix" --type-add 'header:*.h' -t header -C2

Repository: flashinfer-ai/flashinfer

Length of output: 4636


🏁 Script executed:

rg -n "mUseShuffledMatrix" csrc/trtllm_low_latency_gemm_runner.cu -C 3

Repository: flashinfer-ai/flashinfer

Length of output: 389


Fix field name: mUseShuffledMatrix should be mUseShuffledMatrixA.

At line 172, the code references configOptions.mUseShuffledMatrix, but the GemmOptions.h header defines this field as mUseShuffledMatrixA. Using the incorrect field name will cause a compilation error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@csrc/trtllm_low_latency_gemm_runner.cu` at line 172, The code references
configOptions.mUseShuffledMatrix which does not exist; replace that reference
with the correct field name configOptions.mUseShuffledMatrixA (e.g., in the
conditional around configOptions.mUseShuffledMatrix) so it matches the
GemmOptions.h definition; search for other uses of mUseShuffledMatrix in
trtllm_low_latency_gemm_runner.cu and update them to mUseShuffledMatrixA to
avoid compilation errors.

@flashinfer-bot
Collaborator

[FAILED] Pipeline #45774302: 1/20 passed

@jimmyzho jimmyzho changed the title from "draft: update gemm cubins from trtllm-gen" to "Update gemm cubins from trtllm-gen" on Mar 24, 2026
@jimmyzho
Contributor Author

/bot run

@flashinfer-bot
Collaborator

GitLab MR !396 has been updated with latest changes, and the CI pipeline #46911385 is currently running. I'll report back once the pipeline job completes.

@jimmyzho
Contributor Author

/bot run

@flashinfer-bot
Collaborator

GitLab MR !396 has been updated with latest changes, and the CI pipeline #46922352 is currently running. I'll report back once the pipeline job completes.

@jimmyzho
Contributor Author

jimmyzho commented Apr 4, 2026

/bot run

@flashinfer-bot
Collaborator

GitLab MR !396 has been updated with latest changes, and the CI pipeline #47676798 is currently running. I'll report back once the pipeline job completes.

@jimmyzho
Contributor Author

jimmyzho commented Apr 4, 2026

/bot run

@flashinfer-bot
Collaborator

GitLab MR !396 has been updated with latest changes, and the CI pipeline #47683475 is currently running. I'll report back once the pipeline job completes.

@flashinfer-bot
Collaborator

[SUCCESS] Pipeline #47683475: 10/20 passed

@jimmyzho jimmyzho removed the run-ci label Apr 4, 2026
@jimmyzho jimmyzho added run-ci and removed ready labels Apr 4, 2026
Collaborator

@IwakuraRein IwakuraRein left a comment


LGTM

@aleozlx aleozlx enabled auto-merge (squash) April 7, 2026 17:24
@jimmyzho jimmyzho removed the run-ci label Apr 7, 2026
@jimmyzho jimmyzho added the run-ci label Apr 7, 2026
@aleozlx aleozlx merged commit 1fd6305 into flashinfer-ai:main Apr 7, 2026
28 of 37 checks passed
@jimmyzho jimmyzho deleted the update-cubin branch April 7, 2026 22:37